Why we can't have 0.2
Floating-point numbers often get made fun of because they can't compute \(0.1 + 0.2\) accurately. The usual explanation is that those numbers simply cannot be expressed exactly in a binary format.
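A quick check in Python (any language using IEEE 754 doubles behaves the same way) shows the mismatch:

```python
# 0.1 and 0.2 are each rounded to the nearest representable double,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004
print(a == 0.3)     # False
print(f"{a:.20f}")  # 0.30000000000000004441 -- the value actually stored
```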
We can define the set \(X\) of floating-point numbers in a binary system with precision \(p\), where \(\alpha\) is the mantissa and \(\beta\) is the exponent, given the natural numbers \(\mathbb{N}_0\) (\(0, 1, 2, 3, ...\); \(\mathbb{N}_1\) is the same set starting at \(1\)) and the integers \(\mathbb{Z}\) (\(..., -2, -1, 0, 1, 2, ...\)):
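One way to spell this out (the particular normalization below, with an integer mantissa scaled by \(2^{p-1}\) and with zero, signs and subnormals ignored, is a simplifying sketch rather than the full IEEE 754 definition):

\[
X = \left\{ \frac{\alpha}{2^{p-1}} \cdot 2^{\beta} \;\middle|\; \alpha \in \mathbb{N}_1,\; 2^{p-1} \le \alpha < 2^{p},\; \beta \in \mathbb{Z} \right\}
\]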
Let's represent an annoying value such as \(0.2\). We need to find an \(\alpha\) and a \(\beta\). Since \(0.2\) is smaller than \(1\), we know that \(\beta \lt 0\). Additionally, we can derive the domain of the \(\alpha\)-term, such that:
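Under the sketched normalization above, that domain is the significand range together with the equivalent bounds on the integer mantissa:

\[
1 \le \frac{\alpha}{2^{p-1}} < 2
\qquad\Longleftrightarrow\qquad
2^{p-1} \le \alpha < 2^{p},\quad \alpha \in \mathbb{N}_1
\]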
To find \(\beta\), first find the power-of-two multiplier that brings \(0.2\) into the range \([1,2\rangle\):
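Trying successive powers of two (this spelled-out search is my illustration of the step):

\[
0.2 \cdot 2^{1} = 0.4, \qquad
0.2 \cdot 2^{2} = 0.8, \qquad
0.2 \cdot 2^{3} = 1.6 \in [1,2\rangle
\qquad\Longrightarrow\qquad \beta = -3
\]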
For \(\beta = -3\) we get:
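Substituting \(\beta = -3\) into the sketched representation:

\[
0.2 = \frac{\alpha}{2^{p-1}} \cdot 2^{-3}
\qquad\Longrightarrow\qquad
\frac{\alpha}{2^{p-1}} = 0.2 \cdot 2^{3} = 1.6
\]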
To solve \(\alpha\), we look at the fractional part:
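Splitting \(1.6\) into its integer and fractional parts (still in terms of the sketch):

\[
\alpha = 1.6 \cdot 2^{p-1}
= \underbrace{2^{p-1}}_{\in\,\mathbb{N}_1}
+ \underbrace{0.6 \cdot 2^{p-1}}_{\text{must be in } \mathbb{N}_0}
\]

So everything hinges on whether \(0.6 \cdot 2^{p-1}\) can ever be a natural number.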
The only way to make \(\alpha \in \mathbb{N}_1\) is to multiply \(0.6\) by a multiple of \(5\) (e.g. \(0.6 \cdot 5 = 3\)), which is impossible here: \(0.6\) only ever gets multiplied by \(2^n\), and the prime \(5\) never appears in the factorization of a power of \(2\); see the fundamental theorem of arithmetic.
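To see what gets stored instead of \(0.2\), a small Python check (my addition, not part of the original derivation) exposes the power-of-two denominator:

```python
from fractions import Fraction

# The double closest to 0.2 is a dyadic rational: its denominator is 2**54,
# so it can approximate 1/5 but never equal it.
stored = Fraction(0.2)
print(stored)                        # 3602879701896397/18014398509481984
print(stored.denominator == 2**54)   # True
print(stored == Fraction(1, 5))      # False
```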